Wronging a Right: Generating Better Errors to Improve Grammatical Error Detection
Grammatical error correction, like other machine learning tasks, greatly
benefits from large quantities of high quality training data, which is
typically expensive to produce. While writing a program to automatically
generate realistic grammatical errors would be difficult, one could learn the
distribution of naturally occurring errors and attempt to introduce them into
other datasets. Initial work on inducing errors in this way using statistical
machine translation has shown promise; we investigate cheaply constructing
synthetic samples, given a small corpus of human-annotated data, using an
off-the-rack attentive sequence-to-sequence model and a straightforward
post-processing procedure. Our approach yields error-filled artificial data
that helps a vanilla bi-directional LSTM to outperform the previous state of
the art at grammatical error detection, and a previously introduced model to
gain further improvements of over 5% in score. When attempting to
determine if a given sentence is synthetic, a human annotator at best achieves
a score of 39.39, indicating that our model generates mostly human-like
instances.
Comment: Accepted as a short paper at EMNLP 201
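The abstract's core idea, learning the distribution of naturally occurring errors from a small annotated corpus and injecting them into clean text, can be illustrated with a toy sketch. This is not the paper's sequence-to-sequence model; the functions and the word-level substitution scheme below are hypothetical simplifications for illustration only.

```python
import random

# Hypothetical illustration (not the paper's model): learn a simple
# distribution of word-level substitution errors from aligned
# (correct, errorful) sentence pairs, then sample from it to corrupt
# clean sentences into synthetic training data.

def learn_error_distribution(pairs):
    """Count word substitutions observed between aligned correct/errorful pairs."""
    subs = {}
    for correct, errorful in pairs:
        for c, e in zip(correct.split(), errorful.split()):
            if c != e:
                subs.setdefault(c, []).append(e)
    return subs

def introduce_errors(sentence, subs, rate=1.0, rng=random):
    """Replace words with observed error forms at the given rate."""
    out = []
    for word in sentence.split():
        if word in subs and rng.random() < rate:
            out.append(rng.choice(subs[word]))
        else:
            out.append(word)
    return " ".join(out)

pairs = [("I am going to the store", "I is going to a store")]
subs = learn_error_distribution(pairs)
print(introduce_errors("I am at the park", subs, rate=1.0))
# With rate=1.0 every word seen in the error table is corrupted.
```

A real system, as the abstract describes, would learn this mapping with an attentive sequence-to-sequence model rather than word-level counts, capturing insertions, deletions, and context-dependent errors.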
Learning from Demonstration in the Wild
Learning from demonstration (LfD) is useful in settings where hand-coding
behaviour or a reward function is impractical. It has succeeded in a wide range
of problems but typically relies on manually generated demonstrations or
specially deployed sensors and has not generally been able to leverage the
copious demonstrations available in the wild: those that capture behaviours
that were occurring anyway using sensors that were already deployed for another
purpose, e.g., traffic camera footage capturing demonstrations of natural
behaviour of vehicles, cyclists, and pedestrians. We propose Video to Behaviour
(ViBe), a new approach to learn models of behaviour from unlabelled raw video
data of a traffic scene collected from a single, monocular, initially
uncalibrated camera with ordinary resolution. Our approach calibrates the
camera, detects relevant objects, tracks them through time, and uses the
resulting trajectories to perform LfD, yielding models of naturalistic
behaviour. We apply ViBe to raw videos of a traffic intersection and show that
it can learn purely from videos, without additional expert knowledge.
Comment: Accepted to the IEEE International Conference on Robotics and
Automation (ICRA) 2019; extended version with appendix
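The pipeline the abstract describes (calibrate the camera, detect relevant objects, track them through time, and hand the resulting trajectories to LfD) can be sketched as a chain of stages. The function names and data shapes below are illustrative assumptions, not the authors' API; the calibration and detection stages are stubbed out.

```python
# Hypothetical skeleton of a ViBe-style pipeline (names are illustrative):
# calibrate -> detect -> track -> trajectories ready for LfD.

def calibrate(frames):
    # Placeholder: a real system would estimate camera parameters from the
    # monocular footage; here frames pass through unchanged.
    return frames

def detect(frame):
    # Placeholder detector: each "frame" is already a list of (id, x, y)
    # detections, standing in for the output of an object detector.
    return frame

def track(detections_per_frame):
    """Group per-frame detections into per-object trajectories by id."""
    trajectories = {}
    for t, detections in enumerate(detections_per_frame):
        for obj_id, x, y in detections:
            trajectories.setdefault(obj_id, []).append((t, x, y))
    return trajectories

def extract_trajectories(raw_frames):
    frames = calibrate(raw_frames)
    return track([detect(f) for f in frames])

# Two frames of a toy scene with a car and a pedestrian.
frames = [[("car1", 0.0, 0.0), ("ped1", 5.0, 5.0)],
          [("car1", 1.0, 0.0), ("ped1", 5.0, 6.0)]]
print(extract_trajectories(frames)["car1"])  # [(0, 0.0, 0.0), (1, 1.0, 0.0)]
```

The per-object trajectories produced by the final stage are exactly the kind of input a learning-from-demonstration method consumes in place of manually generated demonstrations.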